11. Lab: Layer Concatenation
In the last lesson, you covered skip connections, which are a great way to retain some of the finer details from earlier layers as we decode, or upsample, the layers back to the original image size. Previously, we discussed one way to carry this out: an element-wise addition operation that adds two layers. We will now go over another simple technique, in which we concatenate two layers instead of adding them.
Concatenating two layers, the upsampled layer and a layer with more spatial information than the upsampled one, serves the same purpose as adding them: it lets the decoder recover spatial detail that was lost during encoding. It is also quite straightforward to implement.
Using the tf.contrib.keras function definition, it can be implemented as follows:
from tensorflow.contrib.keras.python.keras import layers
output = layers.concatenate(inputs)
where inputs is a list of the layers that you are concatenating.
In the lab's notebook, you will be able to implement the above in the decoder_block() function using a similar format, as follows:
output = layers.concatenate([input_layer_1, input_layer_2])
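To make the shapes concrete, here is a minimal standalone sketch (the layer sizes are arbitrary illustrations, not the lab's actual dimensions, and plain UpSampling2D stands in for the upsampling step used in the lab):

from tensorflow.contrib.keras.python.keras import layers

# A small, deep layer from late in the encoder, e.g. 8x8 with 256 filters ...
small_layer = layers.Input(shape=(8, 8, 256))
# ... and a larger, shallower layer from earlier in the network, 16x16 with 64 filters.
large_layer = layers.Input(shape=(16, 16, 64))

# Upsample the small layer so its height and width match the large layer.
upsampled = layers.UpSampling2D(size=(2, 2))(small_layer)

# Concatenate along the depth (channel) axis: the result is 16x16 with 256 + 64 = 320 filters.
output = layers.concatenate([upsampled, large_layer])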
One added advantage of concatenating the layers is flexibility: the depths of the input layers do not need to match, unlike when you add them element-wise. This helps simplify the implementation as well.
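For comparison, here is a rough sketch of that difference (the shapes are arbitrary): element-wise addition requires both layers to have identical shapes, including depth, while concatenation only needs the spatial dimensions to agree.

from tensorflow.contrib.keras.python.keras import layers

a = layers.Input(shape=(16, 16, 256))   # upsampled layer
b = layers.Input(shape=(16, 16, 64))    # skip-connection layer with a different depth

# Works: the depths may differ, and the result simply stacks them along the channel axis.
concat_output = layers.concatenate([a, b])

# Would raise an error: element-wise addition needs identical shapes, so b would first
# have to be projected to 256 filters (for example, with a 1x1 convolution).
# add_output = layers.add([a, b])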
While layer concatenation is helpful on its own, it is often better to add some regular or separable convolution layers after this step so that the model can better learn the finer spatial details carried over from the previous layers. In the lab, you will be able to try this out yourself!
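As one possible sketch of that idea (the parameter names and layer choices below are illustrative assumptions, not the lab's required solution), a decoder block could upsample, concatenate, and then apply a separable convolution followed by batch normalization:

from tensorflow.contrib.keras.python.keras import layers

def decoder_block(small_ip_layer, large_ip_layer, filters):
    # Upsample the smaller, deeper input so it matches the spatial size of the larger one.
    # (UpSampling2D stands in for whichever upsampling method you use in the lab.)
    upsampled = layers.UpSampling2D(size=(2, 2))(small_ip_layer)

    # Concatenate the upsampled layer with the higher-resolution layer.
    output = layers.concatenate([upsampled, large_ip_layer])

    # A separable convolution plus batch normalization helps the model learn
    # the finer spatial details carried over by the skip connection.
    output = layers.SeparableConv2D(filters, (3, 3), padding='same',
                                    activation='relu')(output)
    output = layers.BatchNormalization()(output)
    return output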
In the next section, we will cover an important metric that can help gauge the performance of a semantic segmentation network!